"Rocks are smarter than cats because rocks have the sense to go away when you kick them." Nothing would stop us agreeing with the cognitive scientist Zenon Pylyshyn's statement unless we first stipulated that intelligence requires direction, in our case the agency of human consciousness. It is perhaps this set of beliefs and desires, and indeed any sense of semantics behind the syntax, which so complicates the goal of a humanly-conscious machine, as opposed to a behaviourist's stimulus-response model of a human (as Skinner put it, "The question is not whether machines think, but whether men do"). Furthermore, as William James recognised with his analogy of the lover beating a path to his beloved being so very different from iron filings attracted to a magnet, we have truth-obeying reason to help us, as the computer scientists Newell and Simon recognised with their definition of intelligence as consisting of three steps: specifying a goal; assessing the current situation to see how it differs from the goal; and applying a set of operations that reduces the difference.
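Newell and Simon's three steps amount to what is often called means-ends analysis. A minimal sketch follows; the integer states, the goal, and the two operations are invented purely for illustration and are not their actual General Problem Solver:

```python
# A minimal sketch of means-ends analysis: repeatedly pick the
# operation that most reduces the difference between the current
# situation and the goal. States are plain integers for illustration.

def difference(state, goal):
    """Assess how far the current situation is from the goal."""
    return abs(goal - state)

def means_ends(state, goal, operations, max_steps=100):
    """Apply difference-reducing operations until the goal is reached."""
    for _ in range(max_steps):
        if state == goal:
            return state
        # Choose the operation whose result lies closest to the goal.
        best = min((op(state) for op in operations),
                   key=lambda s: difference(s, goal))
        if difference(best, goal) >= difference(state, goal):
            return state  # no available operation reduces the difference
        state = best
    return state

# Example: reach 17 from 0 using "add 5" and "add 1" as the operations.
print(means_ends(0, 17, [lambda s: s + 5, lambda s: s + 1]))  # 17
```

The greedy choice of the most difference-reducing operation is the simplest reading of their definition; a fuller treatment would also handle cases where the difference must temporarily increase.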
Despite having had an enormous
effect outside the field of experimental psychology, IQ testing has a very
different basis. It grew out of Binet's commission by the French government to identify intellectually-handicapped children needing special education; he realised that the task required a test of underlying capacity rather than an exam focused on knowledge-based answers. He decided upon three principles:
1. a complex test would be required because intelligence is complex
2. using completely familiar or unfamiliar items eliminates variations in knowledge, experience, background and teaching standards
3. using age changes, i.e. development, as the criterion for selecting items and standardising the results
With his colleague, Simon, he produced a list of tasks, which they ordered according to the mean age of children able to perform each one (e.g. utilising working memory, copying simple diagrams, supplying opposite analogies, recognising verbal absurdities and rhyme). The entire Binet-Simon test was devised in this way, using the age at which an item could first be answered as indicative of its difficulty, based on his assumption that intelligence develops sequentially
with age. This is justifiable in neurophysiological terms: post-natal
myelination of axons (such as in the visual cortex); genetic time-lapsed
switches (with phenomena of intelligence such as language acquisition deteriorating
markedly post-puberty); and the synaptic plasticity of the brain which admits
the incredible number of connections made in the child's early years. Moreover, stage-based theories of logical development, such as Piaget's, demonstrate marked sequential development in tests like seriation and class inclusion, which are utilised in IQ tests.
Ordering development-sensitive items led to Binet's fulfilment of the commission, neatly defining and sorting intellectually-handicapped children quantitatively as those with a mental age (MA) markedly lower than their chronological age (CA), with mental age defined in terms of the number of items answered (6 items = 5-year-old, 12 items = 6-year-old, etc.). The test's validity derived from its success as a predictor of academic success, achieving a correlation of around +0.5. This has remained a tenet of IQ tests since, including Terman's American adaptation, with most having an IQ/exam correlation of between +0.4 and +0.6 (the average being approximately +0.55, according to Anastasi). Stern later extended
this concept to produce his Intelligence Quotient (IQ) using the formula:
IQ = (MA / CA) × 100
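Stern's quotient can be computed directly; the sample ages below are invented for illustration:

```python
def iq(mental_age, chronological_age):
    """Stern's Intelligence Quotient: mental age over chronological
    age, scaled by 100 so average development scores exactly 100."""
    return 100 * mental_age / chronological_age

# A child of 8 performing at the level of a typical 10-year-old:
print(iq(10, 8))  # 125.0
# Mental age equal to chronological age gives the average score:
print(iq(6, 6))   # 100.0
```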
The test had a variety of implications: it had wide practical benefits in education; demonstrated objectively the jumps in intellectual development; and showed the difficulty for children of performing seemingly simple tasks (such as copying a diamond, which requires a MA of 7, as opposed to copying a square which requires a MA of 5).
It was Spearman at the turn of the century who brought the debate about intelligence away from validated measurements to whatever nebulous quality it is that we possess which is versatile enough to be applicable to all these heterogeneous tasks. This is demonstrated by the continuing correlation between the marks achieved by an individual across a wide range of seemingly disparate intelligence-testing activities. He posited a "general factor" of intelligence,
g, which he measured using factor
analyses, although he admitted the necessity for there to be some relatively
independent specific factors as well. This seems particularly necessary in the light of the so-called idiot-savants (usually suffering from autism or in some way handicapped), whose fantastic abilities in one area, such as memory, computation (e.g. the "calendrical calculators") or drawing, belie a wholly universal intelligence, despite the evident correlation between the variety of tasks employed by IQ tests. Although the search for g has advanced little since the turn of the century, it has been possible to break abilities down into categories, the two most easily delineated being "verbal" and "spatial". This verbal/non-verbal split is employed by most modern tests,
such as the Wechsler tests (WPPSI, WISC and WAIS), and has proved particularly
useful in the study of disorders like autism or brain damage.
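Spearman's positive manifold, the all-positive correlations across disparate tasks, can be illustrated crudely by simulation. The sketch below invents scores driven partly by a shared factor g plus task-specific noise, and then checks that every pairwise correlation comes out positive; the task names and parameters are hypothetical, and this is a simplification of factor analysis, not Spearman's actual method:

```python
# Illustrating the "positive manifold": simulate test scores as a
# shared general factor g plus independent task-specific factors,
# then compute pairwise Pearson correlations, which all come out
# positive. Tasks and distributions are invented for illustration.
import random

random.seed(0)
n_people = 500
scores = {"verbal": [], "spatial": [], "arithmetic": []}
for _ in range(n_people):
    g = random.gauss(0, 1)              # shared "general factor"
    for task in scores:
        specific = random.gauss(0, 1)   # task-specific factor
        scores[task].append(g + specific)

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

tasks = list(scores)
for i in range(len(tasks)):
    for j in range(i + 1, len(tasks)):
        r = correlation(scores[tasks[i]], scores[tasks[j]])
        print(tasks[i], tasks[j], round(r, 2))  # each pair positive
```

With equal variance in g and the specific factors, each pair correlates near +0.5, roughly the IQ/exam figures quoted above; a stronger or weaker g shifts all the correlations together, which is exactly the pattern factor analysis exploits.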
The use of IQ tests has grown enormously. However, given the purely developmental basis and premises of their design, they tend to be most accurate for differentiating individual children or as a marker of future ability. Even so, reasonably strong correlations have been shown between adults' IQs and their IQs as children.
"Intelligence is what the IQ tests measure" (Tomlin) is the flippant definition with an element of truth, given intelligence's elusive but seemingly measurable quality in humans. As Pinker summarised it, "Intelligence is the ability to attain goals in the face of obstacles by means of decisions based on rational rules", which takes into account how very well-equipped we are to live in our world. It is only by making assumptions induced from our wealth of experience that we are able to function so well.
His explanation of how our minds work is founded on this evolutionary
psychology approach, as well as a computational theory of mind where
function-specific modules give us our amazing versatility within defined
limits.